11 research outputs found

    A Simulator of the OR-Parallel Token Machine

    This report is mainly meant as the documentation of the simulator of the OR-Parallel Token Machine for Horn Clause programs. The simulator has been used to investigate the dynamic characteristics of pure Horn Clause programs and to evaluate several storage structures. We start by briefly describing the virtual machine. Then we discuss the merits of the specification language Meta IV and the programming language Simula, and show transformations between the two. Finally, we describe the constituent parts of the simulation system, namely the underlying message-passing mechanism and the three components of the machine: instruction processor, token pool and storage. The mapping of the specification into Simula shows the power of using object-oriented languages for implementing abstract specifications and for simulation purposes.

    Performance evaluation of a storage model for OR-parallel execution of logic programs

    As the next step towards a computer architecture for parallel execution of logic programs we have implemented four refinements of the basic storage model for OR-parallelism and gathered data about their performance on two types of shared memory architectures, with and without local memories. The results show how the different properties of the implementations influence performance, and indicate that the implementations using hashing techniques (hash windows) will perform best, especially on systems with a global storage and caches. We raise the question of the usefulness of the simulation technique as a tool in developing new computer architectures. Our answer is that simulations cannot give the ultimate answers to the design questions, but if judiciously chosen parts of the machine are simulated on a detailed level, then the obtained results can give very good guidance in making design choices.
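    The hash-window technique mentioned above can be illustrated with a minimal sketch. This is an assumption based on the abstract's description, not the authors' implementation: each OR-branch carries a private hash table ("window") of conditional bindings, so sibling branches never see each other's bindings, and dereferencing a variable walks from the current branch towards the root.

    ```python
    # Hedged sketch of hash-window bindings for OR-parallel execution.
    # All names (BranchNode, bind, deref) are illustrative assumptions.

    class BranchNode:
        def __init__(self, parent=None):
            self.parent = parent      # enclosing OR-branch (None at the root)
            self.window = {}          # var -> value, private to this branch

        def bind(self, var, value):
            # A conditional binding goes into this branch's own window,
            # so sibling OR-branches are unaffected.
            self.window[var] = value

        def deref(self, var):
            # Consult each hash window from this branch up to the root;
            # the cost is bounded by the depth of the branch.
            node = self
            while node is not None:
                if var in node.window:
                    return node.window[var]
                node = node.parent
            return None               # unbound

    root = BranchNode()
    left, right = BranchNode(root), BranchNode(root)
    left.bind("X", 1)
    right.bind("X", 2)
    # left and right each see their own binding of X, not the sibling's.
    ```

    The design choice this models is the trade-off the abstract evaluates: binding is cheap and local, while dereferencing pays a chain lookup whose length depends on branch depth.
    
    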

    Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.

    OR-parallel Prolog made efficient on shared memory multiprocessors

    With the arrival of commercially available shared-memory multiprocessors, Prolog implementation efforts begin to shift from single-processor architectures to the new ones. Among the main problems are efficient implementation of operations on variables and of task switching. Most of the solutions proposed so far suffer from expensive, non-constant time implementation of operations on variables. We propose a model (Versions-Vector Model) in which operations on all variables are constant time operations. The price we pay is a non-constant time of a task switch. As a remedy we propose two ways of decreasing that price. The first is promotion of variables on a task switch, from versions-vectors to the stack or heap, making subsequent task switches cheaper. The second is delayed installation of variables in versions-vectors, decreasing the cost of short branches. We believe that the increased memory consumption induced by our model can be accepted as it is traded for speed. Original report number R87006. Also published in Proceedings of the 1987 Symposium on Logic Programming, August 31-September 4, 1987, San Francisco, California, pp. 69-79, IEEE Computer Society Press.
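    The core idea of the Versions-Vector Model can be sketched as follows. This is an illustrative reading of the abstract, not the paper's implementation: each shared variable keeps a vector of values indexed by worker id, so binding and dereferencing are constant-time array operations, while the cost moves to the task switch, which must install the resumed branch's bindings.

    ```python
    # Hedged sketch of versions-vector bindings; NWORKERS, VVar and
    # task_switch are assumed names for illustration only.

    NWORKERS = 4
    UNBOUND = object()        # sentinel for an unbound slot

    class VVar:
        def __init__(self):
            # One slot per worker: each worker sees its own version.
            self.versions = [UNBOUND] * NWORKERS

        def bind(self, worker, value):
            self.versions[worker] = value      # O(1) indexed write

        def deref(self, worker):
            return self.versions[worker]       # O(1) indexed read

    def task_switch(worker, bindings):
        # Non-constant cost: install the bindings of the branch being
        # resumed into the worker's slot of each versions-vector.
        for var, value in bindings:
            var.versions[worker] = value

    x = VVar()
    x.bind(0, "a")
    x.bind(1, "b")
    # Worker 0 and worker 1 hold independent versions of x.
    ```

    The two remedies in the abstract then correspond to shrinking the `task_switch` loop: promotion moves stable bindings out of the vectors, and delayed installation postpones filling slots until they are actually read.
    
    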

    Pseudoknot: a Float-Intensive Benchmark for Functional Compilers

    Over 20 implementations of different functional languages are compared using one program. Aspects studied are compile time and execution time. Another important point is how the program can be modified and tuned to obtain maximal performance on each language implementation. Finally, an interesting question is whether laziness is or is not beneficial for this application. With few exceptions, the compilers take a long time to compile this program. Compilers that generate C or Lisp are much slower than those that generate native code directly. Interestingly, there is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age for this application. The speed of C can even be approached by some implementations, but not without special measures such as strictness annotations. No implementation that has been tested actually equals the performance of C.